
Enhancing Fault-Tolerant Space Computing: Guidance Navigation and Control (GNC) and Landing Vision System (LVS) Implementations on Next-Gen Multi-Core Processors

Yun, Kyongsik, Bayard, David, Kubiak, Gerik, Owens, Austin, Johnson, Andrew, Johnson, Ryan, Scharf, Dan, Lu, Thomas

arXiv.org Artificial Intelligence

Future planetary exploration missions demand high-performance, fault-tolerant computing to enable autonomous Guidance, Navigation, and Control (GNC) and Lander Vision System (LVS) operations during Entry, Descent, and Landing (EDL). This paper evaluates the deployment of GNC and LVS algorithms on next-generation multi-core processors--HPSC, Snapdragon VOXL2, and AMD Xilinx Versal--demonstrating up to 15x speedup for LVS image processing and over 250x speedup for Guidance for Fuel-Optimal Large Divert (GFOLD) trajectory optimization compared to legacy spaceflight hardware. To ensure computational reliability, we present ARBITER (Asynchronous Redundant Behavior Inspection for Trusted Execution and Recovery), a Multi-Core Voting (MV) mechanism that performs real-time fault detection and correction across redundant cores. ARBITER is validated in both static optimization tasks (GFOLD) and dynamic closed-loop control (Attitude Control System). A fault injection study further identifies the gradient computation stage in GFOLD as the most sensitive to bit-level errors, motivating selective protection strategies and vector-based output arbitration. This work establishes a scalable and energy-efficient architecture for future missions, including Mars Sample Return, Enceladus Orbilander, and Ceres Sample Return, where onboard autonomy, low latency, and fault resilience are critical.
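The core idea behind a multi-core voting scheme like ARBITER can be sketched in a few lines: run the same computation on redundant cores and arbitrate over the resulting output vectors, accepting any value that a majority of cores agree on within a numerical tolerance. The sketch below is a hypothetical simplification for illustration only (the abstract does not describe the flight implementation); the function name and tolerance are assumptions.

```python
import numpy as np

def vote(outputs, tol=1e-9):
    """Majority-vote over redundant output vectors.

    outputs: list of np.ndarray, one result per redundant core.
    Returns the first candidate that agrees with a majority of
    cores within `tol`; raises if no majority exists (an
    uncorrectable fault).
    """
    n = len(outputs)
    for cand in outputs:
        # Count how many cores (including this one) produced
        # a vector equal to the candidate within tolerance.
        agree = sum(
            1 for other in outputs
            if np.allclose(cand, other, atol=tol)
        )
        if agree > n // 2:
            return cand
    raise RuntimeError("no majority: uncorrectable fault")

# Three redundant cores; one suffers a bit-level error in its result.
good = np.array([1.0, 2.0, 3.0])
faulty = np.array([1.0, 2.0, 3.5])
result = vote([good, good.copy(), faulty])
```

Vector-based arbitration of this kind compares whole output vectors rather than individual scalars, which is why a single corrupted element (as in `faulty` above) disqualifies that core's entire result.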


AI Is Designing Bizarre New Physics Experiments That Actually Work

WIRED

The original version of this story appeared in Quanta Magazine. There are precision measurements, and then there's the Laser Interferometer Gravitational-Wave Observatory. In each of LIGO's twin gravitational wave detectors (one in Hanford, Washington, and the other in Livingston, Louisiana), laser beams bounce back and forth down the four-kilometer arms of a giant L. When a gravitational wave passes through, the length of one arm changes relative to the other by less than the width of a proton. It's by measuring these minuscule differences--a sensitivity akin to sensing the distance to the star Alpha Centauri down to the width of a human hair--that discoveries are made. The design of the machine was decades in the making, as physicists needed to push every aspect to its absolute physical limits. Construction began in 1994 and took more than 20 years, including a four-year shutdown to improve the detectors, before LIGO detected its first gravitational wave in 2015: a ripple in the space-time fabric coming from the faraway collision of a pair of black holes.


Global Task-aware Fault Detection, Identification For On-Orbit Multi-Spacecraft Collaborative Inspection

Gupta, Akshita, Nakka, Yashwanth Kumar, Choi, Changrak, Rahmani, Amir

arXiv.org Artificial Intelligence

In this paper, we present a global-to-local task-aware fault detection and identification algorithm to detect failures in a multi-spacecraft system performing a collaborative inspection (referred to as global) task. The inspection task is encoded as a cost functional $\mathcal{H}$ that informs global (task allocation and assignment) and local (agent-level) decision-making. The metric $\mathcal{H}$ is a function of the inspection sensor model and the agent's full pose. We use the cost functional $\mathcal{H}$ to design a metric that compares expected and actual performance, detecting a faulty agent via a threshold. We use higher-order gradients of $\mathcal{H}$ to derive a new metric that identifies the type of fault, distinguishing task-specific sensor faults from agent-level actuator and sensor faults. Furthermore, we propose an approach to design adaptive thresholds for each fault type that incorporate the time dependence of the inspection task. We demonstrate the efficacy of the proposed method empirically by simulating and detecting faults (such as inspection sensor, actuator, and agent sensor faults) in a low-Earth-orbit collaborative spacecraft inspection task, using the metrics and thresholds designed from the global task cost $\mathcal{H}$.
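The detection step described above — comparing expected against actual task cost per agent, with a threshold that adapts over time — can be sketched as follows. This is a loose illustration, not the paper's algorithm: the decay law, agent ids, and numbers are all assumptions made for the example.

```python
def detect_faulty_agents(expected_cost, actual_cost, base_threshold, t, decay=0.05):
    """Flag agents whose actual task cost deviates from the expected
    cost by more than a time-adaptive threshold.

    expected_cost, actual_cost: dicts mapping agent id -> task cost.
    The threshold tightens as the inspection task matures, a crude
    stand-in for a time-dependent threshold design.
    """
    threshold = base_threshold / (1.0 + decay * t)
    return [
        agent for agent in expected_cost
        if abs(actual_cost[agent] - expected_cost[agent]) > threshold
    ]

# Agent a2's actual cost deviates far more than its peers'.
expected = {"a1": 1.00, "a2": 1.00, "a3": 1.00}
actual = {"a1": 1.02, "a2": 1.80, "a3": 0.99}
flagged = detect_faulty_agents(expected, actual, base_threshold=0.5, t=10)
```

Identifying the *type* of fault would then require the higher-order cost-gradient metrics the abstract describes, which are beyond this sketch.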


Visual anemometry of natural vegetation from their leaf motion

Goldshmid, Roni H., Dabiri, John O., Sader, John E.

arXiv.org Artificial Intelligence

High-resolution, near-ground wind-speed data are critical for improving the accuracy of weather predictions and climate models,$^{1-3}$ supporting wildfire control efforts,$^{4-7}$ and ensuring the safe passage of airplanes during takeoff and landing maneuvers.$^{8,9}$ Quantitative anemometry generally employs on-site instrumentation for accurate single-position data, or sophisticated remote techniques such as Doppler radar for quantitative field measurements. It is widely recognized that the wind-induced motion of vegetation depends in a complex manner on its structure and mechanical properties, which has precluded its use in quantitative anemometry.$^{10-14}$ We analyze measurements on a host of different vegetation, showing that leaf motion can be decoupled from the motion of the leaf's branch and support structure at low-to-moderate wind speed, $U_{wind}$. This wind-speed range is characterized by a leaf Reynolds number, enabling the development of a remote, quantitative anemometry method based on the formula $U_{wind}\approx740\sqrt{\mu U_{leaf}/\rho D}$, which relies only on the leaf size $D$, its measured fluctuating (RMS) speed $U_{leaf}$, the air's dynamic viscosity $\mu$, and its mass density $\rho$. This formula is corroborated by a first-principles model and validated using a host of laboratory and field tests on diverse vegetation types, ranging from oak, olive, and magnolia trees through to camphor and bullgrass. The findings of this study open the door to a new paradigm in anemometry, using natural vegetation to enable remote, rapid, quantitative field measurements at global locations with minimal cost.
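The stated formula is simple enough to evaluate directly. The sketch below plugs in standard sea-level air properties ($\mu \approx 1.8\times10^{-5}$ Pa·s, $\rho \approx 1.2$ kg/m³ — assumed defaults, not values from the paper) and an illustrative leaf size and RMS speed.

```python
import math

def wind_speed(u_leaf, leaf_size, mu=1.8e-5, rho=1.2):
    """Estimate wind speed from leaf motion via the formula
    U_wind ~= 740 * sqrt(mu * U_leaf / (rho * D)).

    u_leaf: RMS leaf speed (m/s); leaf_size: leaf size D (m);
    mu: air dynamic viscosity (Pa*s); rho: air density (kg/m^3).
    """
    return 740.0 * math.sqrt(mu * u_leaf / (rho * leaf_size))

# A 10 cm leaf fluttering at 0.5 m/s RMS implies a wind speed of
# roughly 6-7 m/s under the assumed air properties.
u = wind_speed(u_leaf=0.5, leaf_size=0.10)
```

Note that only the leaf's RMS speed and size enter the estimate, which is what makes the method attractive for remote, camera-based measurement.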


Planning, scheduling, and execution on the Moon: the CADRE technology demonstration mission

Rabideau, Gregg, Russino, Joseph, Branch, Andrew, Dhamani, Nihal, Vaquero, Tiago Stegun, Chien, Steve, de la Croix, Jean-Pierre, Rossi, Federico

arXiv.org Artificial Intelligence

NASA's Cooperative Autonomous Distributed Robotic Exploration (CADRE) mission, slated for flight to the Moon's Reiner Gamma region in 2025/2026, is designed to demonstrate multi-agent autonomous exploration of the Lunar surface and sub-surface. A team of three robots and a base station will autonomously explore a region near the lander, collecting the data required for 3D reconstruction of the surface with no human input; and then autonomously perform distributed sensing with multi-static ground penetrating radars (GPR), driving in formation while performing coordinated radar soundings to create a map of the subsurface. At the core of CADRE's software architecture is a novel autonomous, distributed planning, scheduling, and execution (PS&E) system. The system coordinates the robots' activities, planning and executing tasks that require multiple robots' participation while ensuring that each individual robot's thermal and power resources stay within prescribed bounds, and respecting ground-prescribed sleep-wake cycles. The system uses a centralized-planning, distributed-execution paradigm, and a leader election mechanism ensures robustness to failures of individual agents. In this paper, we describe the architecture of CADRE's PS&E system; discuss its design rationale; and report on verification and validation (V&V) testing of the system on CADRE's hardware in preparation for deployment on the Moon.
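A minimal form of the leader-election robustness mechanism mentioned above can be sketched as a fixed-priority election over the agents that still respond to health checks. This is a generic illustration, not CADRE's actual protocol; the agent names and priority ordering are invented for the example.

```python
def elect_leader(agents, alive):
    """Pick a leader deterministically from the agents still responding.

    agents: list of agent ids in a fixed priority order;
    alive: set of ids that passed their most recent health check.
    Returns the highest-priority live agent, or None if all are down.
    """
    for agent in agents:
        if agent in alive:
            return agent
    return None

# If the base station and rover_1 stop responding, leadership
# (and with it, centralized planning) falls to the next agent in line.
fleet = ["base_station", "rover_1", "rover_2", "rover_3"]
leader = elect_leader(fleet, alive={"rover_2", "rover_3"})
```

Because every agent evaluates the same deterministic rule over the same liveness information, the surviving agents converge on the same leader without negotiation — the property a centralized-planning, distributed-execution system needs when the planner node fails.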


Opening the Black Box of 3D Reconstruction Error Analysis with VECTOR

Fygenson, Racquel, Jawad, Kazi, Li, Isabel, Ayoub, Francois, Deen, Robert G., Davidoff, Scott, Moritz, Dominik, Hess-Flores, Mauricio

arXiv.org Artificial Intelligence

Reconstruction of 3D scenes from 2D images is a technical challenge that impacts domains from Earth and planetary sciences and space exploration to augmented and virtual reality. Reconstruction algorithms first identify common features across images and then minimize reconstruction errors after estimating the shape of the terrain. This bundle adjustment (BA) step optimizes around a single, simplifying scalar value that obfuscates many possible sources of error. This metric also provides no visibility into how particular images, lighting conditions, camera positions, or details of the morphology of the remote environment might interact to create inaccuracies. The impact of these unknowns compounds in domains where high-accuracy terrain reconstruction is critical to outcomes, like science or space exploration, where there is no ground truth and inaccurate reconstruction can lead to false results or risk billion-dollar spacecraft.
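The core complaint — that bundle adjustment reports one scalar that hides which observations drive the error — can be made concrete with a toy breakdown. This sketch is illustrative only (the image ids and residual values are invented): it contrasts the single sum-of-squares objective with a per-image RMS view of the same residuals.

```python
import numpy as np

def ba_error_breakdown(residuals_by_image):
    """Contrast the scalar BA objective with a per-image error view.

    residuals_by_image: dict mapping image id -> array of reprojection
    residuals (pixels). The scalar objective is the total sum of
    squares; the breakdown exposes which images drive the error.
    """
    scalar = sum(float(np.sum(r ** 2)) for r in residuals_by_image.values())
    per_image = {
        img: float(np.sqrt(np.mean(r ** 2)))  # RMS reprojection error
        for img, r in residuals_by_image.items()
    }
    return scalar, per_image

residuals = {
    "img_01": np.array([0.1, -0.2, 0.1]),
    "img_02": np.array([2.5, -3.1, 2.8]),  # a poorly registered image
}
total, rms = ba_error_breakdown(residuals)
```

The scalar `total` alone gives no hint that `img_02` is responsible for nearly all of the error — exactly the kind of structure a visual error-analysis tool is meant to surface.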


Predicting Action Content On-Line and in Real Time before Action Onset -- an Intracranial Human Study

Ye, Shengxuan

Neural Information Processing Systems

The ability to predict action content from neural signals in real time before the action occurs has been long sought in the neuroscientific study of decision-making, agency and volition. On-line real-time (ORT) prediction is important for understanding the relation between neural correlates of decision-making and conscious, voluntary action as well as for brain-machine interfaces. Here, epilepsy patients, implanted with intracranial depth microelectrodes or subdural grid electrodes for clinical purposes, participated in a "matching-pennies" game against an opponent. In each trial, subjects were given a 5 s countdown, after which they had to raise their left or right hand immediately as the "go" signal appeared on a computer screen. They won a fixed amount of money if they raised a different hand than their opponent and lost that amount otherwise.


ROAMER: Robust Offroad Autonomy using Multimodal State Estimation with Radar Velocity Integration

Nissov, Morten, Khattak, Shehryar, Edlund, Jeffrey A., Padgett, Curtis, Alexis, Kostas, Spieler, Patrick

arXiv.org Artificial Intelligence

Reliable offroad autonomy requires low-latency, high-accuracy state estimates of pose as well as velocity that remain viable in environments with sub-optimal operating conditions for the perception modalities used. Because state estimation remains a single point of failure in the majority of aspiring autonomous systems, failing to address the environmental degradation that the perception sensors may experience under such conditions can be a mission-critical shortcoming. In this work, a method for integrating radar velocity information into a LiDAR-inertial odometry solution is proposed, enabling consistent estimation performance even when LiDAR-inertial odometry degrades. The proposed method utilizes the direct velocity-measuring capability of a Frequency Modulated Continuous Wave (FMCW) radar sensor to enhance the LiDAR-inertial smoother onboard the vehicle by integrating the forward velocity measurement into the graph-based smoother. This leads to increased robustness of the overall estimation solution, even in the absence of LiDAR data. The method was validated through hardware experiments conducted onboard an all-terrain vehicle traveling at high speed, ~12 m/s, in demanding offroad environments.
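The key coupling — a radar forward-speed measurement constraining the estimated body velocity inside a graph-based smoother — amounts to a residual of the following form. This is a generic 2D sketch under assumed conventions (the radar measures speed along the body x-axis), not the paper's factor formulation or any particular smoother library's API.

```python
import numpy as np

def velocity_residual(state_velocity, radar_forward_speed, heading):
    """Residual tying the estimated body velocity to an FMCW radar's
    forward (along-track) speed measurement.

    state_velocity: estimated 2D velocity in the world frame (m/s);
    heading: vehicle yaw (rad). The radar is assumed to measure speed
    along the body x-axis, so the state velocity is projected onto
    that axis before comparison.
    """
    forward_axis = np.array([np.cos(heading), np.sin(heading)])
    predicted = float(state_velocity @ forward_axis)
    return predicted - radar_forward_speed

# Heading 0 rad, estimated velocity 11.8 m/s forward, radar reports
# 12.0 m/s: the nonzero residual pulls the estimate toward the radar.
r = velocity_residual(np.array([11.8, 0.0]), 12.0, heading=0.0)
```

In a factor-graph smoother, this residual would enter the least-squares objective weighted by the radar's measurement noise, so the radar keeps constraining velocity even when LiDAR registration fails.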


Gordon Moore, Intel co-founder who predicted rise of the PC, dies at 94

The Guardian

Intel Corp co-founder Gordon Moore, a pioneer in the semiconductor industry whose "Moore's Law" predicted a steady rise in computing power for decades, has died at the age of 94, the company announced. Intel and Moore's family philanthropic foundation said he died on Friday surrounded by family at his home in Hawaii. Co-launching Intel in 1968, Moore was the rolled-up-sleeves engineer within a triumvirate of technology luminaries that eventually put "Intel Inside" processors in more than 80% of the world's personal computers. In an article he wrote in 1965, Moore observed that, thanks to improvements in technology, the number of transistors on microchips had roughly doubled every year since integrated circuits were invented a few years before. His prediction that the trend would continue became known as "Moore's Law" and, later amended to every two years, it helped push Intel and rival chipmakers to aggressively target their research and development resources to make sure that rule of thumb came true.


ShadowNav: Crater-Based Localization for Nighttime and Permanently Shadowed Region Lunar Navigation

Cauligi, Abhishek, Swan, R. Michael, Ono, Masahiro, Daftry, Shreyansh, Elliott, John, Matthies, Larry, Atha, Deegan

arXiv.org Artificial Intelligence

There has been increasing interest in missions that drive significantly longer distances per day than has been achieved to date. Further, some of these proposed missions require autonomous driving and absolute localization in darkness. For example, the Endurance A mission proposes to drive 1200 km of its total traverse at night. The lack of natural light available during such missions limits what can be used as visual landmarks and the range at which landmarks can be observed. For planetary rovers to traverse long ranges, onboard absolute localization is critical to the rover's ability to maintain its planned trajectory and avoid known hazardous regions. Currently, absolute localization is accomplished through a ground-in-the-loop (GITL) operation, wherein a human operator matches local maps or images from onboard with orbital images and maps. This GITL operation limits the distance that can be driven in a day to a few hundred meters, which is the distance over which the rover can maintain acceptable localization error via relative methods. Previous work has shown that using craters as landmarks is a promising approach for performing absolute localization on the Moon during the day. In this work, we present a method of absolute localization that utilizes craters as landmarks and matches detected crater edges on the surface with known craters in orbital maps. We focus on a localization method based on a perception system with an external illuminator and a stereo camera. We evaluate (1) both monocular and stereo surface crater edge detection techniques, (2) methods of scoring the crater edge matches for optimal localization, and (3) localization performance on simulated Lunar surface imagery at night. We demonstrate that this technique shows promise for maintaining the absolute localization error of less than 10 m required for most planetary rover missions.
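The matching-and-scoring step — associating detected craters with known craters in an orbital map and scoring the association to rank candidate localizations — can be sketched as a greedy nearest-neighbor match. This is a deliberately simplified stand-in for the edge-based matching the abstract describes; the coordinates, gating distance, and mean-distance score are all assumptions made for illustration.

```python
import math

def match_craters(detected, map_craters, max_dist=5.0):
    """Greedily match detected crater centers to known map craters.

    detected, map_craters: lists of (x, y) positions in a common
    local frame (meters). Returns (matches, score): index pairs
    within `max_dist` of each other, and the mean match distance,
    usable to score a candidate localization (lower is better).
    """
    matches, dists, used = [], [], set()
    for i, (dx, dy) in enumerate(detected):
        best, best_d = None, max_dist
        for j, (mx, my) in enumerate(map_craters):
            if j in used:
                continue
            d = math.hypot(dx - mx, dy - my)
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            matches.append((i, best))
            used.add(best)
            dists.append(best_d)
    score = sum(dists) / len(dists) if dists else float("inf")
    return matches, score

# Two detected craters match the two nearest map craters; the third
# map crater is too far away to be associated with anything.
detected = [(10.2, 4.9), (30.1, 15.0)]
map_craters = [(10.0, 5.0), (30.0, 15.2), (80.0, 80.0)]
matches, score = match_craters(detected, map_craters)
```

In a full system, the score would be evaluated over many candidate rover poses, with the best-scoring pose taken as the absolute localization estimate.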